Implementing OpenSHMEM Using MPI-3 One-Sided Communication
Authors
Abstract
This paper reports the design and implementation of OpenSHMEM over MPI using the new one-sided communication features in MPI-3, which include not only new functions (e.g., remote atomics) but also a new memory model that is consistent with that of SHMEM. We use a new, non-collective MPI communicator creation routine to allow SHMEM collectives to use their MPI counterparts. Finally, we leverage MPI shared-memory windows within a node, which allow direct (load-store) access. Performance evaluations are conducted for shared-memory and InfiniBand conduits using microbenchmarks.
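The abstract does not include code, but as a rough illustration of the mapping it describes, the C sketch below shows how SHMEM-style put and fetch-and-add operations might be layered on MPI-3 passive-target one-sided communication. All names here (sym_heap_init, my_putmem, my_long_fadd, sym_heap_win, heap_base) are hypothetical, and the offset translation assumes every PE lays out its symmetric heap identically; this is an assumption for illustration, not the paper's actual implementation.

```c
/* Sketch: mapping SHMEM-like operations onto MPI-3 one-sided calls.
 * Hypothetical helper names; not the paper's implementation. */
#include <mpi.h>
#include <stddef.h>

static MPI_Win  sym_heap_win;   /* window exposing the symmetric heap */
static void    *heap_base;      /* local base of the symmetric heap   */

/* One-time setup: expose a symmetric heap and open a passive-target epoch. */
static void sym_heap_init(size_t heap_bytes)
{
    MPI_Win_allocate((MPI_Aint)heap_bytes, 1, MPI_INFO_NULL,
                     MPI_COMM_WORLD, &heap_base, &sym_heap_win);
    /* Keep every PE's memory accessible for the program's lifetime,
     * mirroring SHMEM's always-exposed remote memory. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, sym_heap_win);
}

/* shmem_putmem-like put: copy nbytes into symmetric address 'dest' on PE 'pe'.
 * Assumes 'dest' has the same offset from heap_base on every PE. */
static void my_putmem(void *dest, const void *src, size_t nbytes, int pe)
{
    MPI_Aint disp = (MPI_Aint)((char *)dest - (char *)heap_base);
    MPI_Put(src, (int)nbytes, MPI_BYTE, pe, disp, (int)nbytes, MPI_BYTE,
            sym_heap_win);
    /* Flush gives remote completion; a real implementation could defer
     * this to shmem_quiet/shmem_fence since put only needs local completion. */
    MPI_Win_flush(pe, sym_heap_win);
}

/* shmem_long_fadd-like remote fetch-and-add via an MPI-3 remote atomic. */
static long my_long_fadd(long *dest, long value, int pe)
{
    long old;
    MPI_Aint disp = (MPI_Aint)((char *)dest - (char *)heap_base);
    MPI_Fetch_and_op(&value, &old, MPI_LONG, pe, disp, MPI_SUM, sym_heap_win);
    MPI_Win_flush(pe, sym_heap_win);
    return old;
}

static void sym_heap_finalize(void)
{
    MPI_Win_unlock_all(sym_heap_win);
    MPI_Win_free(&sym_heap_win);
}
```

The other two ingredients mentioned in the abstract map similarly: SHMEM collectives can run over communicators built with the non-collective MPI_Comm_create_group routine, and intra-node load-store access can use windows created with MPI_Win_allocate_shared.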
Related articles
Implementing OpenSHMEM for the Adapteva Epiphany RISC Array Processor
The energy-efficient Adapteva Epiphany architecture exhibits massive many-core scalability in a physically compact 2D array of RISC cores with a fast network-on-chip (NoC). With fully divergent cores capable of MIMD execution, the physical topology and memory-mapped capabilities of the core and network translate well to partitioned global address space (PGAS) parallel programming models. Follow...
A Study of the Bucket-Exchange Pattern in the PGAS Model Using the ISx Integer Sort Mini-Application
ISx is an open-source integer sort mini-application that was released as a research vehicle for the study of irregular all-to-all communication patterns. The mini-app is uniquely valuable to the PGAS community because its dynamic, data dependent communication pattern presents an opportunity to show the benefit of the PGAS abstraction and one-sided communication. Its original release featured an...
From MPI to OpenSHMEM: Porting LAMMPS
This work details the opportunities and challenges of porting a petascale-capable, MPI-based application, LAMMPS, to OpenSHMEM. We investigate the major programming challenges stemming from the differences in communication semantics, address space organization, and synchronization operations between the two programming models. This work provides several approaches to solve those challenges for re...
Optimizing Collective Communication in OpenSHMEM
Message Passing Interface (MPI) has been the de-facto programming model for scientific parallel applications. However, data driven applications with irregular communication patterns are harder to implement using MPI. The Partitioned Global Address Space (PGAS) programming models present an alternative approach to improve programmability. OpenSHMEM is a library-based implementation of the PGAS m...
Hybrid Programming Using OpenSHMEM and OpenACC
With high-performance systems exploiting multicore and accelerator-based architectures on a distributed shared memory system, heterogeneous hybrid programming models are the natural choice to exploit all the hardware made available on these systems. Previous efforts looking into hybrid models have primarily focused on using OpenMP directives (for shared memory programming) with MPI (for inter-no...